Can Large Language Models Estimate Public Opinion about Global Warming? An Empirical Assessment of Algorithmic Fidelity and Bias

Published in PLOS Climate, 2024

Recommended citation: Lee, S., Peng, T. Q., Goldberg, M. H., Rosenthal, S. A., Kotcher, J. E., Maibach, E. W., & Leiserowitz, A. (2024). Can large language models estimate public opinion about global warming? An empirical assessment of algorithmic fidelity and bias. PLOS Climate, 3(8), e0000429. https://www.doi.org/10.1371/journal.pclm.0000429

Abstract

Large language models (LLMs) can be used to estimate human attitudes and behaviors, including measures of public opinion, a capacity referred to as algorithmic fidelity. This study assesses the algorithmic fidelity and bias of LLMs in estimating public opinion about global warming. LLMs were conditioned on demographic and/or psychological covariates to simulate survey responses. Findings indicate that LLMs can effectively reproduce presidential voting behaviors but not global warming attitudes unless additional covariates are included. When conditioned on both demographic and psychological covariates, GPT-4 demonstrates improved accuracy, ranging from 53% to 91%, in predicting beliefs and emotions about global warming. Additionally, we identify an algorithmic bias whereby the models underestimate the global warming opinions of Black Americans. While highlighting the potential of LLMs to aid social science research, these results underscore the importance of conditioning, model selection, survey question format, and bias assessment when employing LLMs for survey simulation.
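To make the conditioning step concrete: the idea is to render a respondent's covariates into a persona prompt and ask the model to answer a closed-ended survey item as that person. The paper's actual prompts are not reproduced here; the sketch below is a minimal illustration assuming the OpenAI chat completions API, and the persona template, covariate fields, and example question are all hypothetical stand-ins.

```python
# Minimal sketch: conditioning an LLM on respondent covariates to
# simulate a closed-ended survey answer. The persona wording, field
# names, and survey item are illustrative, not the study's prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_persona(covariates: dict) -> str:
    """Render a persona string from demographic and psychological covariates."""
    return (
        "You are answering a public opinion survey. Answer strictly as the "
        "following person would, replying with one of the listed options only.\n"
        + "\n".join(f"- {key}: {value}" for key, value in covariates.items())
    )


def simulate_response(covariates: dict, question: str, options: list[str]) -> str:
    """Ask the model to pick one response option as the simulated respondent."""
    prompt = f"{question}\nOptions: {', '.join(options)}"
    completion = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduce sampling variance across simulated respondents
        messages=[
            {"role": "system", "content": build_persona(covariates)},
            {"role": "user", "content": prompt},
        ],
    )
    return completion.choices[0].message.content.strip()


# Example respondent: demographics plus one psychological covariate.
respondent = {
    "Age": "45",
    "Gender": "Woman",
    "Race/ethnicity": "Black or African American",
    "Party identification": "Democrat",
    "2020 presidential vote": "Joe Biden",
    "Worry about global warming": "Somewhat worried",
}
answer = simulate_response(
    respondent,
    "Do you think global warming is happening?",
    ["Yes", "No", "Don't know"],
)
print(answer)
```

In a study like this, a routine along these lines would be run once per survey respondent, with accuracy then computed by comparing the simulated answers against the actual survey responses for each covariate configuration.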

Keywords

global warming; large language models; algorithmic fidelity; public opinion